The best budgeting apps for 2025
Managing your finances doesn't have to be a headache, especially with the right budgeting app at your fingertips. Whether you're trying to track everyday spending, save for a big purchase or just keep a closer eye on your subscriptions, there's an app that can help. With Mint shutting down, many users have been looking for the best budget apps to replace it, and luckily there are plenty of solid alternatives. From AI-powered spending trackers to apps that break down your expenses into easy-to-follow categories, the best budgeting tools help you take control of your money without the hassle of spreadsheets. Some focus on automating savings, while others give you a deep dive into your finances with powerful analytics and custom reporting. If you're still searching for the right Mint alternative, check out our guide to the best budgeting apps to replace Mint. Below, we've rounded up the top budgeting apps to help you track spending, save smarter, and stick to your financial goals.

No pun intended, but what I like about Quicken Simplifi is its simplicity. Whereas other budgeting apps try to distinguish themselves with dark themes and customizable emoji, Simplifi has a clean user interface, with a landing page that you just keep scrolling through to get a detailed overview of all your stats.
BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text
Many of the recent breakthroughs in language modeling have resulted from scaling effectively the same model architecture to larger datasets. In this vein, recent work has highlighted performance gains from increasing training dataset size and quality, suggesting a need for novel sources of large-scale datasets. In this work, we introduce BeanCounter, a public dataset consisting of more than 159B tokens extracted from businesses' disclosures. We show that this data is indeed novel: less than 0.1% of BeanCounter appears in Common Crawl-based datasets, and the data is an order of magnitude larger than datasets relying on similar sources. Given the data's provenance, we hypothesize that BeanCounter is comparatively more factual and less toxic than web-based datasets. Exploring this hypothesis, we find that many demographic identities occur with similar prevalence in BeanCounter but in significantly less toxic contexts relative to other datasets. To demonstrate the utility of BeanCounter, we evaluate and compare two LLMs continually pre-trained on BeanCounter against their base models. We find an 18-33% reduction in toxic generation and improved performance within the finance domain for the continually pre-trained models. Collectively, our work suggests that BeanCounter is a novel source of low-toxicity, high-quality domain-specific data with sufficient scale to train multi-billion parameter LLMs.
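A contamination check like the one behind the "less than 0.1% appears in Common Crawl-based datasets" claim can be sketched with hashed n-gram overlap. The 13-gram default, the MD5 hashing, and the function names below are illustrative assumptions, not BeanCounter's actual methodology:

```python
import hashlib

def ngram_hashes(text, n=13):
    """Hash every n-gram of whitespace tokens. A 13-gram match is a common
    contamination heuristic; the exact granularity here is an assumption."""
    toks = text.split()
    return {
        hashlib.md5(" ".join(toks[i:i + n]).encode()).hexdigest()
        for i in range(max(0, len(toks) - n + 1))
    }

def overlap_fraction(corpus_a, corpus_b, n=13):
    """Fraction of documents in corpus_a sharing any n-gram with corpus_b."""
    seen = set()
    for doc in corpus_b:
        seen |= ngram_hashes(doc, n)
    hits = sum(1 for doc in corpus_a if ngram_hashes(doc, n) & seen)
    return hits / len(corpus_a)
```

At web scale the same idea would use a Bloom filter or MinHash rather than an exact set, but the overlap statistic being estimated is the same.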
Alleviating LLM-based Generative Retrieval Hallucination in Alipay Search
Yedan Shen, Kaixin Wu, Yuechen Ding, Jingyuan Wen, Hong Liu, Mingjie Zhong, Zhouhan Lin, Jia Xu, Linjian Mo
Generative retrieval (GR) has revolutionized document retrieval with the advent of large language models (LLMs), and LLM-based GR is gradually being adopted by the industry. Despite its remarkable advantages and potential, LLM-based GR suffers from hallucination, generating documents that are irrelevant to the query in some instances and severely challenging its credibility in practical applications. We therefore propose an optimized GR framework designed to alleviate retrieval hallucination, which integrates knowledge-distilled reasoning into model training and incorporates a decision agent to further improve retrieval precision. Specifically, we employ LLMs to assess and reason about GR-retrieved query-document (q-d) pairs, then distill the reasoning data as transferred knowledge to the GR model. Moreover, we utilize a decision agent as a post-processing step: it extends the GR-retrieved documents through a retrieval model and selects the most relevant ones from multiple perspectives as the final generative retrieval result. Extensive offline experiments on real-world datasets and online A/B tests on Fund Search and Insurance Search in Alipay demonstrate our framework's superiority and effectiveness in improving search quality and conversion gains.
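The decision-agent post-processing stage described above can be sketched as follows. The pooling of GR and retrieval-model candidates, the `scorers` interface, and the score-averaging rule are illustrative assumptions, not Alipay's actual implementation:

```python
def decision_agent(query, gr_docs, retrieval_docs, scorers, top_k=5):
    """Hypothetical post-processing step: pool generative-retrieval results
    with retrieval-model results, score each (query, doc) pair from several
    perspectives, and keep the best. `scorers` is a list of functions
    (query, doc) -> float; the aggregation by simple mean is an assumption."""
    pool = list(dict.fromkeys(gr_docs + retrieval_docs))  # dedupe, keep order
    scored = [(sum(s(query, d) for s in scorers) / len(scorers), d) for d in pool]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [d for _, d in scored[:top_k]]
```

In practice each "perspective" would be a relevance model (lexical match, dense similarity, an LLM judge); here any callable with that signature fits.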
FinCon: A Synthesized LLM Multi-Agent System with Conceptual Verbal Reinforcement for Enhanced Financial Decision Making
Large language models (LLMs) have demonstrated notable potential in conducting complex tasks and are increasingly utilized in various financial applications. However, high-quality sequential financial investment decision-making remains challenging. These tasks require multiple interactions with a volatile environment for every decision, demanding sufficient intelligence to maximize returns and manage risks. Although LLMs have been used to develop agent systems that surpass human teams and yield impressive investment returns, opportunities to enhance multi-source information synthesis and optimize decision-making outcomes through timely experience refinement remain unexplored. Here, we introduce FinCon, an LLM-based multi-agent framework tailored for diverse financial tasks.
AI Agents: Evolution, Architecture, and Real-World Applications
Artificial Intelligence (AI) has evolved dramatically over the past decade, transitioning from specialized systems designed for narrow tasks to increasingly sophisticated architectures capable of autonomous operation across diverse domains. Among these advancements, AI agents represent a particularly significant development, embodying a paradigm shift in how intelligent systems interact with their environments, make decisions, and achieve complex goals. Unlike traditional AI systems that execute predefined algorithms within constraints, AI agents possess the capacity to autonomously perceive, reason, and act, often adapting their behavior based on environmental feedback and accumulated experience.

The concept of an AI agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools. These agents can encompass a wide range of functionalities beyond natural language processing, including decision making, problem solving, interacting with external environments, and executing actions. As Kapoor et al. (2024) note in their analysis of agent benchmarks, the development of AI agents represents an exciting new research direction with significant implications for real-world applications across numerous industries.

The evolution of AI agents has been accelerated by recent breakthroughs in large language models (LLMs), which have provided a foundation for more sophisticated reasoning capabilities. Modern AI agents leverage these advanced language models as core components, augmenting them with specialized modules for memory, planning, tool use, and environmental interaction. This integration enables agents to perform complex tasks that would be challenging or impossible for traditional AI systems, from reconciling financial statements to providing step-by-step instructions for field technicians based on contextual understanding of product information.
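The perceive-reason-act loop with memory and tool use described above can be sketched minimally. The `llm` decision interface, the tool dictionary, and the memory format are assumptions made for illustration, not a specific framework's API:

```python
def run_agent(goal, llm, tools, max_steps=5):
    """Minimal sketch of an agent loop. `llm(goal, memory)` is assumed to
    return either ("act", tool_name, arg) or ("finish", answer); in a real
    system this would be a prompted language model parsing its own output."""
    memory = []  # accumulated observations the model can condition on
    for _ in range(max_steps):
        decision = llm(goal, memory)
        if decision[0] == "finish":
            return decision[1]
        _, tool_name, arg = decision
        observation = tools[tool_name](arg)  # environmental feedback
        memory.append((tool_name, arg, observation))
    return None  # step budget exhausted without a final answer
```

The essential point is the feedback cycle: each tool observation is appended to memory, so later reasoning steps can adapt to what the environment returned.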
ZiGong 1.0: A Large Language Model for Financial Credit
Yu Lei, Zixuan Wang, Chu Liu, Tongyao Wang
Large Language Models (LLMs) have demonstrated strong performance across various general Natural Language Processing (NLP) tasks. However, their effectiveness in financial credit assessment applications remains suboptimal, primarily due to the specialized financial expertise required for these tasks. To address this limitation, we propose ZiGong, a Mistral-based model enhanced through multi-task supervised fine-tuning. To specifically combat model hallucination in financial contexts, we introduce a novel data pruning methodology. Our approach utilizes a proxy model to score training samples, subsequently combining filtered data with original datasets for model training. This data refinement strategy effectively reduces hallucinations in LLMs while maintaining reliability in downstream financial applications. Experimental results show our method significantly enhances model robustness and prediction accuracy in real-world financial scenarios.
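The proxy-model data-pruning strategy described above can be sketched as follows. The scoring function, the threshold, and the "original plus filtered" combination rule follow the abstract's wording, but they are placeholder assumptions rather than ZiGong's exact procedure:

```python
def prune_with_proxy(samples, proxy_score, threshold):
    """Sketch of proxy-model data pruning: a proxy model scores each training
    sample, low-scoring (hallucination-prone) samples are filtered out, and
    the filtered subset is combined with the original data for training.
    `proxy_score` is any callable sample -> float; its form is an assumption."""
    filtered = [s for s in samples if proxy_score(s) >= threshold]
    return samples + filtered  # original data plus the high-scoring subset
```

Note that concatenating the filtered subset onto the original data effectively upweights high-quality samples during training rather than discarding the rest outright.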